Blockchain enhanced lightweight node model
ZHAO Yulong, NIU Baoning, LI Peng, FAN Xing
Journal of Computer Applications    2020, 40 (4): 942-946.   DOI: 10.11772/j.issn.1001-9081.2019111917
The inherent chain structure of blockchain means that its data volume grows linearly and endlessly. Over time, this puts heavy pressure on the storage of a single node and greatly wastes the storage space of the whole system. The Simplified Payment Verification (SPV) node model proposed in the Bitcoin white paper greatly reduces a node's need for storage space. However, it reduces the number of full nodes and increases the pressure on them, which weakens the decentralization of the entire system and introduces security risks such as denial-of-service attacks and Sybil attacks. By analyzing Bitcoin block data, a fully functional enhanced lightweight node model, Enhanced SPV (ESPV), was proposed. Blocks were divided by ESPV into new blocks and old blocks, and different storage management strategies were adopted for them. New blocks were saved in full copy (one copy per node) for transaction verification, allowing ESPV to provide the transaction verification (mining) function at a small storage cost. Old blocks were stored in slices across the nodes of the network and accessed through a hierarchical block partition routing table, thereby reducing the waste of system storage space on the premise of ensuring data availability and reliability. ESPV nodes have full node functionality, which preserves the decentralization of the blockchain system and enhances its security and stability. The experimental results show that ESPV nodes achieve a transaction verification rate of more than 80%, while their data volume and data growth are only 10% of those of full nodes. The data availability and reliability of ESPV are guaranteed, and it is applicable to the whole life cycle of the system.
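As a rough illustration of the storage split described in the abstract, the sketch below fully replicates recent ("new") blocks on every node and assigns each older block to a single shard holder by hashing its height. The function name, the `new_window` cutoff, and the hash-based shard assignment are this sketch's own simplifications, not details from the paper:

```python
import hashlib

def assign_storage(block_heights, tip_height, nodes, new_window=1000):
    """ESPV-style placement sketch: blocks within new_window of the chain
    tip are fully replicated; older blocks are sharded across nodes."""
    placement = {}
    for h in block_heights:
        if tip_height - h < new_window:
            # New block: one full copy per node, so every node can verify.
            placement[h] = list(nodes)
        else:
            # Old block: a single holder chosen by hashing the height.
            digest = hashlib.sha256(str(h).encode()).digest()
            placement[h] = [nodes[digest[0] % len(nodes)]]
    return placement
```

In the real model the hierarchical block partition routing table, not a bare hash, locates old-block slices, and redundancy would be higher than one copy.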
Newton-soft threshold iteration algorithm for robust principal component analysis
WANG Haipeng, JIANG Ailian, LI Pengxiang
Journal of Computer Applications    2020, 40 (11): 3133-3138.   DOI: 10.11772/j.issn.1001-9081.2020030375
Aiming at the Robust Principal Component Analysis (RPCA) problem, a Newton-Soft Threshold Iteration (NSTI) algorithm was proposed to reduce the time complexity of RPCA algorithms. Firstly, the NSTI model was constructed using the sum of the Frobenius norm of the low-rank matrix and the l1-norm of the sparse matrix. Secondly, two different optimization methods were used to compute the two parts of the model simultaneously: Newton's method was used to quickly compute the low-rank matrix, and the soft threshold iteration algorithm was used to quickly compute the sparse matrix. The decomposition of the original data into a low-rank matrix and a sparse matrix was computed by alternating between the two optimization methods. Finally, the low-rank features of the original data were obtained. Under the condition that the data scale is 5 000×5 000 and the rank of the low-rank matrix is 20, the NSTI algorithm improves time efficiency by 24.6% and 45.5% compared with the Gradient Descent (GD) algorithm and the Low-Rank Matrix Fitting (LMaFit) algorithm. For foreground-background separation of a 180-frame video, NSTI takes 3.63 s, with time efficiency 78.7% and 82.1% higher than that of the GD algorithm and the LMaFit algorithm respectively. In the image denoising experiment, the NSTI algorithm takes 0.244 s, and the residual between the processed image and the original image is 0.381 3; the proposed algorithm is 64.3% more time-efficient and 45.3% more accurate than the GD algorithm and the LMaFit algorithm. Experimental results prove that the NSTI algorithm can effectively solve the RPCA problem and improve the time efficiency of RPCA.
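To make the alternating update concrete, the sketch below pairs the l1 soft-thresholding step for the sparse matrix with a rank-truncated SVD standing in for the paper's Newton update of the low-rank part. The function names and the placeholder SVD step are this sketch's assumptions, not the paper's actual Newton iteration:

```python
import numpy as np

def soft_threshold(X, tau):
    # Proximal operator of the l1-norm: shrinks each entry toward zero.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_step(M, L, S, tau=0.1, rank=2):
    """One alternating RPCA update: S by soft-thresholding the residual
    (as in NSTI), L by rank truncation (placeholder for the Newton step)."""
    S = soft_threshold(M - L, tau)
    U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return L, S
```

Iterating `rpca_step` until `L + S` stops changing yields the low-rank/sparse split; NSTI's contribution is making the L-update cheap via Newton's method rather than a full SVD.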
Virus propagation suppression model in mobile wireless sensor networks
WU Sanzhu, LI Peng, WU Sanbin
Journal of Computer Applications    2020, 40 (1): 129-135.   DOI: 10.11772/j.issn.1001-9081.2019040736
To better control the propagation of viruses in mobile wireless sensor networks, an improved dynamic model of virus propagation was established according to the theory of infectious diseases. Dead nodes were introduced into the network, and the communication radius as well as the moving and staying states of virus nodes during propagation were also added. Then, differential equations were established for the model, the existence and stability of the equilibrium point were analyzed, and the conditions for controlling and extinguishing virus propagation were obtained. Furthermore, the effects of the following factors on virus propagation in mobile wireless sensor networks were analyzed: node communication radius, moving velocity, density, the immunity rate of susceptible nodes, the virus detection rate of infected nodes, and node mortality. Finally, the simulation results show that adjusting the parameters in the model can effectively suppress virus propagation in mobile wireless sensor networks.
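For intuition about the epidemic-style dynamics, the sketch below Euler-integrates a plain SIR compartment model. The paper's model additionally includes dead nodes, communication radius, and node mobility, none of which are represented here; the parameter values are arbitrary:

```python
def simulate_epidemic(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, steps=1000, dt=0.1):
    """Euler integration of a basic SIR model: s susceptible, i infected,
    r recovered fractions. beta is the infection rate, gamma the recovery
    (e.g. virus-detection) rate."""
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i * dt   # S -> I transitions this step
        new_rec = gamma * i * dt      # I -> R transitions this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r
```

Raising `gamma` (faster virus detection) or lowering `beta` (e.g. smaller communication radius) suppresses the outbreak, mirroring the parameter analysis in the abstract.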
Dynamic cloud data audit model based on nested Merkle hash tree blockchain
ZHOU Jian, JIN Yu, HE Heng, LI Peng
Journal of Computer Applications    2019, 39 (12): 3575-3583.   DOI: 10.11772/j.issn.1001-9081.2019040764
Cloud storage is popular with users for its high scalability, high reliability, and low-cost data management; however, safeguarding the integrity of cloud data is an important security problem. Currently, public auditing services based on a semi-trusted third party are the most popular and effective cloud data integrity audit scheme, but shortcomings remain, such as single points of failure, computing power bottlenecks, and inefficient localization of erroneous data. Aiming at these defects, a dynamic cloud data audit model based on blockchain was proposed. Firstly, a distributed network and a consensus algorithm were used to establish a blockchain audit network with multiple audit entities, solving the problems of single points of failure and computing power bottlenecks. Then, on the guarantee of the reliability of the blockchain, the chameleon hash algorithm and a nested Merkle Hash Tree (MHT) structure were introduced to realize dynamic operations on cloud data tags in the blockchain. Finally, by using the nested MHT structure and auxiliary path information, the efficiency of locating erroneous data when errors occur during auditing was improved. The experimental results show that, compared with the semi-trusted third-party dynamic audit scheme, the proposed model significantly improves audit efficiency, reduces the time cost of dynamic data operations, and increases the efficiency of locating erroneous data.
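The core data structure above, a Merkle hash tree, can be sketched in a few lines: any change to a data block changes the root, which is what makes integrity auditing possible. The nesting, chameleon hashing, and auxiliary-path verification of the paper are not reproduced here:

```python
import hashlib

def merkle_root(leaves):
    """Binary Merkle hash tree root over a list of data blocks (bytes).
    Odd levels duplicate their last node, a common convention."""
    level = [hashlib.sha256(x).digest() for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[k] + level[k + 1]).digest()
                 for k in range(0, len(level), 2)]
    return level[0].hex()
```

An auditor who stores only the root can detect tampering with any block; the auxiliary path information mentioned in the abstract is the list of sibling hashes needed to recompute the root from one leaf.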
Automatic recognition algorithm of cervical lymph nodes using adaptive receptive field mechanism
QIN Pinle, LI Pengbo, ZHANG Ruiping, ZENG Jianchao, LIU Shijie, XU Shaowei
Journal of Computer Applications    2019, 39 (12): 3535-3540.   DOI: 10.11772/j.issn.1001-9081.2019061069
Aiming at the problem that deep learning network models applied to medical image target detection have only a fixed receptive field and cannot effectively detect cervical lymph nodes with obvious morphological and scale differences, a new recognition algorithm based on an adaptive receptive field mechanism was proposed, applying deep learning to the automatic recognition of cervical lymph nodes in complete three-dimensional medical images for the first time. Firstly, a semi-random sampling method was used to crop the medical sequence images to generate grid-based local image blocks and the corresponding ground-truth labels. Then, the DeepNode network based on the adaptive receptive field mechanism was constructed and trained on the local image blocks and labels. Finally, the trained DeepNode network model was used for prediction: by inputting the whole sequence of images, the cervical lymph node recognition results corresponding to the whole sequence were obtained end-to-end and quickly. On the cervical lymph node dataset, recognition using the DeepNode network achieves a recall rate of 98.13% and a precision of 97.38%, with only 29 false positives per scan and a relatively short running time. The analysis of the experimental results shows that, compared with current algorithms such as the combination of two-dimensional and three-dimensional convolutional neural networks, general three-dimensional object detection, and weakly supervised location-based recognition, the proposed algorithm can realize the automatic recognition of cervical lymph nodes and obtains the best recognition results. The algorithm is end-to-end, simple and efficient, easy to extend to three-dimensional target detection tasks for other medical images, and can be applied to clinical diagnosis and treatment.
Automatic recognition algorithm for cervical lymph nodes using cascaded fully convolutional neural networks
QIN Pinle, LI Pengbo, ZENG Jianchao, ZHU Hui, XU Shaowei
Journal of Computer Applications    2019, 39 (10): 2915-2922.   DOI: 10.11772/j.issn.1001-9081.2019030510
The existing automatic recognition algorithms for cervical lymph nodes have low efficiency and unsatisfactory false positive removal, so a cervical lymph node detection algorithm using cascaded Fully Convolutional Neural Networks (FCNs) was proposed. Firstly, combined with the prior knowledge of doctors, cascaded FCNs were used for preliminary identification: the first FCN extracted the cervical lymph node region from the head-and-neck Computed Tomography (CT) image. Then, the second FCN extracted lymph node candidate samples from that region, and the samples were merged at the three-dimensional (3D) level to generate a 3D image block. Finally, the proposed feature block average pooling method was introduced into a 3D classification network, and 3D input image blocks of different scales were classified into two classes to remove false positives. On the cervical lymph node dataset, the recall of cervical lymph nodes identified by the cascaded FCNs reaches 97.23%, the classification accuracy of the 3D classification network with feature block average pooling reaches 98.7%, and the accuracy of the final result after false positive removal reaches 93.26%. Experimental results show that the proposed algorithm realizes the automatic recognition of cervical lymph nodes with high recall and accuracy, outperforming the methods currently reported in the literature; it is simple and efficient, and easy to extend to other 3D medical image recognition tasks.
Cooperative caching strategy based on user preference for content-centric network
XIONG Lian, LI Pengming, CHEN Xiang, ZHU Hongmei
Journal of Computer Applications    2018, 38 (12): 3509-3513.   DOI: 10.11772/j.issn.1001-9081.2018051057
Nodes in a Content-Centric Network (CCN) cache all passing content by default, without selective caching or optimal placement of content. To solve these problems, a new Cooperative Caching strategy based on User Preference (CCUP) was proposed. Firstly, the user's preference for content type and the content popularity were considered as local preference indexes to realize the selection of cached content. Then, a differentiated caching strategy was executed on the content to be cached: globally active content was cached at important central nodes, while inactive content was cached according to the match between local preference and the distance level between node and user. Finally, both near access of users to locally preferred content and quick distribution of globally active content were achieved. The simulation results show that, compared with typical caching strategies such as LCE (Leave Copy Everywhere), Prob(0.6) (Probabilistic caching with 0.6), and Betw (cache "less for more"), the proposed CCUP has obvious advantages in average cache hit rate and average request delay.
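A toy version of the differentiated caching decision might look as follows. The function name, the single `pref_threshold`, and the boolean `globally_active` flag are this sketch's simplifications; the paper matches preference against distance levels rather than one threshold:

```python
def should_cache(content_type, popularity, user_type_pref,
                 node_is_central, globally_active, pref_threshold=0.5):
    """CCUP-flavored decision sketch: central nodes take globally active
    content; edge nodes cache content matching local user preference."""
    if globally_active:
        # Globally active content belongs at important central nodes.
        return node_is_central
    # Inactive content: cache near users whose preference it matches.
    local_pref = user_type_pref.get(content_type, 0.0) * popularity
    return (not node_is_central) and local_pref >= pref_threshold
```

The effect is the one described in the abstract: popular content concentrates at hubs for fast distribution, while niche content sits close to the users who actually request it.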
Low rank non-linear feature selection algorithm
ZHANG Leyuan, LI Jiaye, LI Pengqing
Journal of Computer Applications    2018, 38 (12): 3444-3449.   DOI: 10.11772/j.issn.1001-9081.2018050954
Concerning the problems of high-dimensional data, such as non-linearity, low-rank form, and feature redundancy, an unsupervised feature selection algorithm based on kernel functions was proposed, named the Low Rank Non-linear Feature Selection algorithm (LRNFS). Firstly, the features of each dimension were mapped into a high-dimensional kernel space, and non-linear feature selection in the low-dimensional space was achieved through linear feature selection in the kernel space. Then, deviation terms were introduced into the self-expression form, and low-rank and sparse processing of the coefficient matrix was achieved. Finally, a sparse regularization factor on the kernel matrix coefficient vector was introduced to implement feature selection. In the proposed algorithm, the kernel matrix was used to represent the non-linear relationship, the global information of the data was taken into account via the low-rank term to perform subspace learning, and the importance of features was determined by the self-expression form. The experimental results show that, compared with the semi-supervised feature selection algorithm via Rescaled Linear Square Regression (RLSR), the classification accuracy of the proposed algorithm after feature selection is increased by 2.34%. The proposed algorithm can solve the problem that the data are linearly inseparable in the low-dimensional feature space, and improves the accuracy of feature selection.
Method for exploiting function level vectorization on single instruction multiple data extensions
LI Yingying, GAO Wei, GAO Yuchen, ZHAI Shengwei, LI Pengyuan
Journal of Computer Applications    2017, 37 (8): 2200-2208.   DOI: 10.11772/j.issn.1001-9081.2017.08.2200
Currently, the two vectorization methods that exploit Single Instruction Multiple Data (SIMD) parallelism are the loop-based method and the Superword Level Parallelism (SLP) method. Focusing on the problem that current compilers cannot realize function level vectorization, a method of function level vectorization based on static single assignment was proposed. Firstly, the variable properties of the program were analyzed; then, a set of compiling directives including SIMD function annotations, uniform clauses, and linear clauses was used to realize function level vectorization; finally, the vectorized code was optimized by using the variable attribute results. Test cases from the fields of multimedia and image processing were selected to test the functionality and performance of the proposed function level vectorization on the Sunway platform. Compared with the scalar execution results, the program after function level vectorization executes more efficiently. The experimental results show that function level vectorization can achieve the same effect as task level parallelism, which is instructive for realizing automatic function level vectorization.
Continuous ultrasound image set segmentation method based on support vector machine
LIU Jun, LI Pengfei
Journal of Computer Applications    2017, 37 (7): 2089-2094.   DOI: 10.11772/j.issn.1001-9081.2017.07.2089
A novel Support Vector Machine (SVM)-based unified segmentation model was proposed for segmenting a continuous ultrasound image set, because traditional SVM-based segmentation methods need to extract sample points for each image to create a separate segmentation model. Firstly, a gray feature was extracted from the gray histogram of each image as the characteristic representing the continuity of the image within the set. Secondly, some images were selected as samples and the gray feature of each pixel was extracted. Finally, the gray feature of each pixel was combined with the sequence-continuity feature of the image in which the pixel was located, and an SVM was used to train a single segmentation model to segment the whole image set. The experimental results show that, compared with the traditional SVM-based segmentation method, the new model greatly reduces the workload of manually selecting sample points when segmenting a large, continuously varying image set, while simultaneously guaranteeing segmentation accuracy.
Fast intra hierarchical algorithm for high efficiency video coding based on texture property and spatial correlation
LI Peng, PENG Zongju, LI Chihang, CHEN Fen
Journal of Computer Applications    2016, 36 (4): 1085-1091.   DOI: 10.11772/j.issn.1001-9081.2016.04.1085
In order to reduce the encoding complexity of the emerging High Efficiency Video Coding (HEVC) standard, a fast hierarchical algorithm for intra coding based on texture property and spatial correlation was proposed. Firstly, a Largest Coding Unit (LCU) level fast algorithm was adopted: the prediction depth of the current LCU was derived by weighted prediction using the depth levels of neighboring LCUs, and the texture complexity of the current LCU was determined by means of standard deviation and an adaptive threshold strategy, so that the Most Probable Depth Range (MPDR) of the current LCU could be predicted by combining its prediction depth with its texture complexity. Secondly, a Coding Unit (CU) level Depth Decision Fast Algorithm (CUDD-FA) was used: by combining edge-map-based early decision of the CU depth level with early termination of CU depth based on Rate Distortion (RD) cost correlation, the depth level of a CU was determined before coding, further reducing the complexity of the intra coding process. The experimental results show that, compared with the original HEVC encoding scheme, the proposed method reduces encoding time by an average of 41.81%, while the BD-rate (Bjøntegaard Delta bit rate) only increases by 0.74% and the BD-PSNR (Bjøntegaard Delta Peak Signal-to-Noise Ratio) only decreases by 0.038 dB; compared with representative algorithms in the literature, the proposed algorithm achieves better RD performance while saving more encoding time. The proposed method significantly reduces the complexity of intra coding on the premise of negligible RD performance loss, especially for high-resolution video sequences, which benefits real-time video applications of the HEVC standard.
Error analysis of unmanned aerial vehicle remote sensing images stitching based on simulation
LI Pengjun, LI Jianzeng, SONG Yao, ZHANG Yan, DU Yulong
Journal of Computer Applications    2015, 35 (4): 1116-1119.   DOI: 10.11772/j.issn.1001-9081.2015.04.1116
Concerning the serious distortion of Unmanned Aerial Vehicle (UAV) remote sensing image stitching caused by the growth of accumulated error, a projection error correction algorithm based on space intersection was proposed. Using space intersection theory, the spatial coordinates of 3D points were calculated from corresponding points; then all 3D points were orthographically projected onto the same space plane, and the orthographic points were projected onto the image plane to obtain corrected corresponding points; finally, the M-estimator Sample Consensus (MSAC) algorithm was used to estimate the homography matrix, from which the stitched image was obtained. The simulation results show that the algorithm can effectively eliminate the projection error and thus suppress UAV remote sensing image stitching error.
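The orthographic correction step above amounts to dropping each 3D point perpendicularly onto a common plane. A minimal sketch, with the function name and plane parameterization chosen here for illustration:

```python
import numpy as np

def orthographic_to_plane(pts3d, normal, point_on_plane):
    """Orthographically project 3D points onto the plane defined by a
    normal vector and one point on it, removing out-of-plane offsets."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = (pts3d - point_on_plane) @ n        # signed distance to the plane
    return pts3d - d[:, None] * n           # subtract the normal component
```

Re-imaging these flattened points gives corresponding points free of perspective-induced displacement, so the homography fitted afterwards (e.g. by MSAC) no longer absorbs that error.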
Shadow detection method based on shadow probability model for remote sensing images
LI Pengwei, GE Wenying, LIU Guoying
Journal of Computer Applications    2015, 35 (2): 510-514.   DOI: 10.11772/j.issn.1001-9081.2015.02.0510
The inhomogeneous spectral response of shadow areas makes threshold-based shadow detection methods produce results that differ considerably from the real situation. To overcome this problem, a new shadow probability model was proposed by combining opacity and intensity. To avoid neglecting the interaction between neighboring pixels, a method based on a multiresolution Markov Random Field (MRF) was proposed for shadow detection in remote sensing images. First, the proposed probability model was used to describe the shadow probability of pixels in the multiresolution images. Then, the Potts model was employed to model the multiscale label fields. Finally, the detection result was obtained by Maximizing the A Posteriori (MAP) probability. The method was compared with several shadow detection methods, including the hue/intensity-based method, the difference dual-threshold method, and a Support Vector Machine (SVM) classifier. The experimental results reveal that the proposed method improves the accuracy of shadow detection for high-resolution urban remote sensing images.
Blowing state recognition of basic oxygen furnace based on feature of flame color texture complexity
LI Pengju, LIU Hui, WANG Bin, WANG Long
Journal of Computer Applications    2015, 35 (1): 283-288.   DOI: 10.11772/j.issn.1001-9081.2015.01.0283
In converter blowing state recognition based on flame images, existing methods underutilize flame color texture information and their recognition rates still need improvement. To deal with this problem, a new converter blowing recognition method based on a flame color texture complexity feature was proposed. Firstly, the flame image was transformed into the HSI color space and non-uniformly quantized; secondly, the co-occurrence matrix of the H component and the S component was computed to fuse the color information of the flame images; thirdly, a flame texture complexity feature descriptor was calculated from the color co-occurrence matrix; finally, the Canberra distance was used as the similarity criterion to classify and identify the blowing state. The experimental results show that, while meeting real-time requirements, the recognition rate of the proposed method is 28.33% and 3.33% higher than those of the gray-level co-occurrence matrix and gray differential statistics methods, respectively.
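The cross-channel co-occurrence matrix at the heart of the method counts how often a quantized H value co-occurs with the S value at a fixed pixel offset. A minimal sketch, assuming the inputs are already quantized to `levels` bins (function name and loop form are this sketch's own):

```python
import numpy as np

def color_cooccurrence(h_q, s_q, levels, dx=1, dy=0):
    """Joint co-occurrence matrix between quantized H values and the S
    values of pixels at offset (dx, dy), fusing color and texture."""
    rows, cols = h_q.shape
    C = np.zeros((levels, levels), dtype=np.int64)
    for y in range(rows - dy):
        for x in range(cols - dx):
            C[h_q[y, x], s_q[y + dy, x + dx]] += 1
    return C
```

Texture complexity descriptors (entropy, contrast, and the like) are then computed from the normalized matrix, and blowing states are compared via the Canberra distance between descriptor vectors.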
Fast image completion algorithm based on random correspondence
XIAO Mang, LI Guangyao, TAN Yunlan, GENG Ruijin, LV Yangjian, XIE Li, PENG Lei
Journal of Computer Applications    2014, 34 (6): 1719-1723.   DOI: 10.11772/j.issn.1001-9081.2014.06.1719
Traditional patch-based image completion algorithms repeatedly search the whole image for the most similar patches and are easily affected by the confidence factor during structure propagation; as a result, they are inefficient and computationally expensive. To overcome these shortcomings, a fast image completion algorithm based on random correspondence was proposed. It adopts a randomized correspondence search to find sample regions that share structure and texture with the target region, thereby reducing the search space. Meanwhile, the method of computing filling priorities based on the confidence factor and edge information was optimized to improve the correctness of structure propagation, and the method of finding the most similar patches was improved. The experimental results show that, compared with traditional algorithms, the proposed approach achieves a 5-10 times speed-up in repair rate and performs better in image completion.
Anti-collision algorithm based on priority grouping
ZHANG Cong-li, PENG Xuan, YANG Lei
Journal of Computer Applications    2012, 32 (12): 3490-3493.   DOI: 10.3724/SP.J.1087.2012.03490
Concerning the low recognition efficiency and high misreading rate in scenarios with many fast-moving tags, an anti-collision algorithm based on grouping first and handling second was proposed. The algorithm reduces the misreading rate by grouping tags according to their order of arrival; it adaptively adjusts the frame length based on the slot situation to improve search efficiency, and uses a jumping dynamic search algorithm to handle collision slots, which reduces the number of reader searches and the amount of system transmission. Matlab simulation results show that the algorithm's communication complexity is lower than that of other commonly used algorithms, and its throughput can reach 0.59-0.6. The larger the number of tags, the more obvious the superiority of the algorithm.
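For context on the frame-length adaptation mentioned above, the snippet below computes the expected throughput of plain framed slotted ALOHA, the baseline that dynamic frame adjustment tries to keep near its optimum of about 0.368. The paper's jumping dynamic search, which pushes throughput to 0.59-0.6, is not modeled here:

```python
def fsa_throughput(n_tags, frame_len):
    """Expected fraction of singleton (successful) slots in framed
    slotted ALOHA with n_tags tags and frame_len slots."""
    p = 1.0 / frame_len
    # A slot succeeds when exactly one of the n tags picks it.
    return n_tags * p * (1.0 - p) ** (n_tags - 1)
```

Throughput peaks when the frame length matches the tag count, which is why estimating the number of unread tags per group and resizing the frame accordingly improves search efficiency.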
Visualization of human meridian based on graphic transformation
LI Peng-feng, CHEN Xin
Journal of Computer Applications    2011, 31 (11): 3035-3037.   DOI: 10.3724/SP.J.1087.2011.03035
This paper proposed a method for locating and displaying the human meridian directly and in real time. Firstly, a multi-channel meridian impedance detector and a magnetic orientation tracker were introduced to locate the meridian and obtain its 3-Dimensional (3D) information. Secondly, the scene camera was calibrated, and the calibration result and the 3D meridian points were transformed into the same world coordinate system according to their relative spatial positions, yielding the camera projection matrix H. Finally, using this matrix H, the 3D meridian points were projected onto the image to form a 2-Dimensional (2D) meridian line, which was matched and fused with the human body image captured by the camera to visualize the human meridian. The result shows that the method can locate and display the human meridian precisely and efficiently on the actual human body image in real time.
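The final projection step, mapping 3D meridian points through the matrix H to 2D image coordinates, is standard pinhole projection in homogeneous coordinates. A minimal sketch (the 3×4 form of H folds intrinsics and extrinsics together; the function name is this sketch's own):

```python
import numpy as np

def project_points(H, pts3d):
    """Project Nx3 world points to Nx2 image coordinates through a 3x4
    camera projection matrix H, using homogeneous coordinates."""
    pts_h = np.hstack([pts3d, np.ones((len(pts3d), 1))])  # append w = 1
    uvw = pts_h @ H.T                                     # apply H
    return uvw[:, :2] / uvw[:, 2:3]                       # perspective divide
```

Drawing the projected 2D points over the live camera frame produces the fused meridian overlay described in the abstract.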
Flexible application framework for manufacturing execution systems
CHAI Yong-sheng, SUN Shu-dong, WU Xiu-li, LI Peng
Journal of Computer Applications    2005, 25 (03): 679-681.   DOI: 10.3724/SP.J.1087.2005.0679
To address the complexity of constructing Manufacturing Execution Systems (MES), a flexible application framework based on a multi-layer application service architecture was proposed. Advanced technologies, such as object-oriented analysis of business processes, an event-driven mechanism with business rules, and business engineering analysis, were used to achieve reconfigurable organization, scalable business processes, and customizable business rules, which increased the development flexibility of MES software systems. Finally, an instance was analyzed and realized using the framework.